When it comes to writing, AI is like a shady used-car salesperson

Buyer beware with generative AI...

When it comes to writing, generative AI can be a bit of a shady used-car salesperson.

You know the type: someone whose ruthless determination to make a sale compels them to promise the impossible. You give them your laundry list of requirements, and miraculously, they just happen to have a vehicle in stock that exactly fits your specifications.

And, because you remind them of their favourite [insert familial relationship here], they’re going to sell it to you for a fraction of the sticker price, as long as you act now! Dazzled, you forgo the fine print and sign on the dotted line. It’s only after you’re behind the wheel that you realize all the dials on the dashboard are painted on.

And so it goes with generative AI, the unscrupulous salesperson’s informational equivalent. Ask it anything, and you’ll find it brazenly meeting your every demand, no matter how wild.

Unlike the human salesperson, however, AI doesn’t intend to mislead. It was built to give you what you want, and by all appearances, the product it delivers comes “fully loaded” with credible information. But a few seconds of clicking on links that lead nowhere reveal the truth.

The net result is the same: AI hallucinations – in the form of made-up sources and facts – leave the information consumer every bit as hapless, whether the deception was intentional or not.

I don’t mean to imply that AI is not useful in writing and research. It’s just that, as many have already observed, you must be prepared to fact-check. You’re probably familiar with the more disastrous bait-and-switch scenarios brought on by AI hallucinations, which have been well-documented in the legal space in both Canada and the US.

Certainly, these overt fabrications showcase the need for information consumers to scrutinize AI content. But there is another subtler, and arguably more pernicious, form of hallucination that can occur when generative AI is asked to summarize a legitimate source.

I have read many AI-generated summaries in my work as a Communications professor, and what I have found is that AI tools will add in details that aren’t contained in the original.*

Not all the time, mind you. But at a frequency that makes me wary. Especially because, like other AI-fabricated content, these small details inevitably sound like they could have been part of the original text. And it’s this plausibility that makes them dangerous.

The upshot? Whether you’re in the market for a car or an article synopsis, there’s just no substitute for “reading the fine print.” If you use AI to get an overview of a text, make sure that you still read it yourself before referring to its contents in your own work.

Only this way will you know what it is that you’re actually buying.

*Just as an unethical salesperson will promise you an “extra” they aren’t in a position to offer. (I can’t stop milking this metaphor!)
